Frequently Asked Questions (FAQS);faqs.478
The weight of the rope and the weight is one-half as much again as the
difference between the weight of the weight and the weight of the weight
plus the weight of the monkey.
How long is the rope?
==> physics/monkey.s <==
The most difficult thing about this puzzle, as you probably expected,
is translating all the convoluted problem statements into equations...
the solution is pretty trivial after that. So...
Let:
m represent monkey
M represent mother of monkey
w represent weight
r represent rope
W[x] = present weight of x (x is m, M, w, or r)
A[x,t] = age of x at time t (x is m or M, t is one of T1 thru T4)
T1 = time at which mother is 3 times as old as monkey
T2 = time at which monkey is 3 times as old as mother at T1
T3 = time at which mother is half as old as monkey at T2
T4 = present time
For the ages, we have:
A[M,T1] = 3*A[m,T1]
A[m,T2] = 3*A[M,T1] = 9*A[m,T1]
A[M,T3] = A[m,T2]/2 = 9*A[m,T1]/2
A[m,T3] = A[m,T1] + (T3-T1)
= A[m,T1] + (A[M,T3]-A[M,T1])
= A[m,T1] + (9*A[m,T1]/2 - 3*A[m,T1])
= 5*A[m,T1]/2
A[M,T4] = 2*A[m,T3] = 5*A[m,T1]
A[m,T4] = A[m,T1] + (T4-T1)
= A[m,T1] + (A[M,T4]-A[M,T1])
= A[m,T1] + (5*A[m,T1] - 3*A[m,T1])
= 3*A[m,T1]
The present ages of monkey and mother sum to 4, so we have
A[m,T4] + A[M,T4] = 4
3*A[m,T1] + 5*A[m,T1] = 4
8*A[m,T1] = 4
A[m,T1] = 1/2
Thus:
A[M,T4] = 5/2
A[m,T4] = 3/2
Now for the weights, translating everything to ounces:
Monkey's weight in lbs = mother's age in years, so:
W[m] = 16*5/2 = 40
Weight and monkey are same weight, so:
W[w] = W[m] = 40
The last paragraph in the problem translates into:
W[r]+W[w] = (3/2)*((W[w]+W[m])-W[w])
W[r]+ 40 = (3/2)*(( 40 + 40 )- 40 )
W[r]+ 40 = 60
W[r] = 20
The rope weighs 4 ounces per foot, so its length is 5 feet.
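As a sanity check, the chain of substitutions above can be verified with a few lines of exact rational arithmetic (a sketch; the variable names follow the A[] and W[] definitions above):

```python
from fractions import Fraction

# Ages: let x = A[m,T1]. The derivation gives A[M,T4] = 5x and
# A[m,T4] = 3x, and the present ages sum to 4 years, so 8x = 4.
x = Fraction(4, 8)
mother_age = 5 * x            # A[M,T4] = 5/2 years
monkey_age = 3 * x            # A[m,T4] = 3/2 years

# Weights, in ounces: the monkey's weight in pounds equals the
# mother's age in years.
W_m = 16 * mother_age         # monkey: 40 oz
W_w = W_m                     # the weight weighs as much as the monkey
# rope + weight is half as much again as (weight + monkey) - weight:
W_r = Fraction(3, 2) * ((W_w + W_m) - W_w) - W_w

length = W_r / 4              # rope weighs 4 oz per foot

print(mother_age, monkey_age, W_r, length)   # 5/2 3/2 20 5
```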
==> physics/particle.p <==
What is the longest time that a particle can take in travelling between two
points if it never increases its acceleration along the way and reaches the
second point with speed V?
==> physics/particle.s <==
Assumptions:
1. x(0) = 0; x(T) = X
2. v(0) = 0; v(T) = V
3. d(a)/dt <= 0
Solution:
a(t) = constant = A = V^2/2X which implies T = 2X/V.
Proof:
Suppose the trip takes time T. Since da/dt <= 0, v(t) is concave
(v'' = da/dt <= 0). A concave function lies on or above the chord
joining its endpoints, so v(0) = 0 and v(T) = V give v(t) >= (V/T)*t
on [0,T]. Integrating, X >= V*T/2, i.e., T <= 2X/V, with equality
exactly when v(t) = (V/T)*t, i.e., constant acceleration A = V^2/2X. QED.
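A quick numerical illustration (a sketch, not part of the original solution): compare the constant-acceleration profile with a front-loaded profile that accelerates at 2A and then coasts. Both cover distance X and arrive with speed V, but the front-loaded trip is shorter, consistent with T = 2X/V being the maximum:

```python
X, V = 100.0, 10.0

# Constant acceleration A = V^2/(2X) gives T = 2X/V.
A = V**2 / (2 * X)
T_const = 2 * X / V

# Front-loaded profile: accelerate at 2A until speed V, then coast.
# The acceleration never increases (it drops from 2A to 0).
t1 = V / (2 * A)              # time to reach speed V
x1 = 0.5 * (2 * A) * t1**2    # distance covered while accelerating
T_front = t1 + (X - x1) / V   # coast the remaining distance at V

print(T_const, T_front)       # 20.0 15.0
```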
==> physics/pole.in.barn.p <==
Accelerate a pole of length l to a constant speed of 90% of the speed of
light (.9c). Move this pole towards an open barn of length .9l (90%
the length of the pole). Then, as soon as the pole is fully inside the
barn, close the door. What do you see and what actually happens?
==> physics/pole.in.barn.s <==
What the observer sees depends upon where the observer is, due to
the finite speed of light.
For definiteness, assume the forward end of the pole is marked "A" and
the after end is marked "B". Let's also assume there is a light source
inside the barn, and that the pole stops moving as soon as end "B" is
inside the barn.
An observer inside the barn next to the door will see the following
sequence of events:
1. End "A" enters the barn and continues toward the back.
2. End "B" enters the barn and stops in front of the observer.
3. The door closes.
4. End "A" continues moving and penetrates the barn at the far end.
5. End "A" stops outside the barn.
An observer at the other end of the barn will see:
1. End "A" enters the barn.
2. End "A" passes the observer and penetrates the back of the barn.
3. If the pole has markings on it, the observer will notice the part
nearest him has stopped moving. However, both ends are still
moving.
4. End "A" stops moving outside the barn.
5. End "B" continues moving until it enters the barn and then stops.
6. The door closes.
After the observers have subtracted out the effects of the finite speed
of light on what they see, both observers will agree on what happened:
The pole entered the barn; the door closed so that the pole was
completely contained within the barn; as the pole was being stopped it
elongated and penetrated the back wall of the barn.
Things are different if you are riding along with the pole. The pole
is never inside the barn since it won't fit. End A of the pole penetrates
the rear wall of the barn before the door is closed.
If the wall of the barn is impenetrable, then in all the above scenarios
substitute "End A of the pole explodes" for "End A penetrates the barn."
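The numbers can be checked with the Lorentz contraction factor (a sketch; lengths are in units of the pole's rest length l):

```python
import math

beta = 0.9
gamma = 1 / math.sqrt(1 - beta**2)   # about 2.294 at 0.9c

pole_rest = 1.0          # pole rest length l
barn_rest = 0.9          # barn rest length 0.9 l

# Barn frame: the moving pole is contracted and fits easily.
pole_in_barn_frame = pole_rest / gamma     # about 0.436 l, less than 0.9 l

# Pole frame: the moving barn is contracted and can never hold the pole.
barn_in_pole_frame = barn_rest / gamma     # about 0.392 l, less than l

print(round(pole_in_barn_frame, 3), round(barn_in_pole_frame, 3))
```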
==> physics/resistors.p <==
What are the resistances between nodes of lattices of resistors in the shape of a:
1. Cube
2. Platonic solid
3. Hypercube
4. Plane sheet
5. Continuous sheet
==> physics/resistors.s <==
1. Cube
The key idea is to observe that if two points in a circuit must be at
the same potential, then you can connect them: no current will flow
through the connection, and the overall properties of the circuit
remain unchanged. In particular, for the cube, there are three
resistors leaving each of the two "connection corners". Since the cube
is completely symmetrical with respect to these three resistors, the
far ends of the resistors may be connected together. And so we end up
with:
|---WWWWWW---| |---WWWWWW---|
| | | |
*--+---WWWWWW---+----+---WWWWWW---+---*
| | | |
|---WWWWWW---| |---WWWWWW---|
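The reduced network is three 1-ohm resistors in parallel, in series with six in parallel, in series with three in parallel; a tiny series/parallel calculation confirms the 5/6 ohm answer (a sketch with hypothetical helper names):

```python
from fractions import Fraction

def parallel(*rs):
    # 1 / (sum of reciprocals), in exact arithmetic
    return 1 / sum(1 / Fraction(r) for r in rs)

def series(*rs):
    return sum(Fraction(r) for r in rs)

# 12 one-ohm edges: 3 leaving the near corner, 6 across the middle,
# 3 into the far corner, with equipotential points merged as above.
R_cube = series(parallel(1, 1, 1),
                parallel(1, 1, 1, 1, 1, 1),
                parallel(1, 1, 1))
print(R_cube)   # 5/6
```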
2. Platonic Solids
Same idea for the solids with 8, 12, and 20 faces, since you use the
symmetry to identify equipotential points. The tetrahedron is a hair
more subtle:
*---|---WWWWWW---|---*
|\ /|
W W W W
W W W W
W W W W
| \ / |
\ || |
\ | /
\ W /
\ W / <-------
\ W /
\|/
+
By symmetry, the endpoints of the marked resistor are equi-potential. Hence
they can be connected together, and so it becomes a simple:
*---+---WWWWW---+----*
| |
+-WWW WWW-+
| |-| |
|-WWW WWW-|
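In the reduced tetrahedron network, the direct 1-ohm edge sits in parallel with two 2-ohm two-resistor paths (the marked resistor between the equipotential points carries no current), giving 1/2 ohm. A quick check (a sketch):

```python
from fractions import Fraction

def parallel(*rs):
    # 1 / (sum of reciprocals), in exact arithmetic
    return 1 / sum(1 / Fraction(r) for r in rs)

# Direct edge (1 ohm) in parallel with two series pairs (2 ohms each);
# the resistor joining the equipotential points is idle.
R_tetra = parallel(1, 1 + 1, 1 + 1)
print(R_tetra)   # 1/2
```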
3. Hypercube
Think of injecting a constant current I into the start vertex.
It splits (by symmetry) into n equal currents in the n arms; the current of
I/n then splits into I/n(n-1), which then splits into I/[n(n-1)(n-1)] and so
on till the halfway point, when these currents start adding up. What is the
voltage difference between the antipodal points? V = I x R; add up the voltages
along any of the paths:
n even:
    V = 2*[I/n + I/(n(n-1)) + I/(n(n-1)^2) + ... + I/(n(n-1)^((n-2)/2))]
n odd:
    V = 2*[I/n + I/(n(n-1)) + I/(n(n-1)^2) + ... + I/(n(n-1)^((n-3)/2))]
        + I/(n(n-1)^((n-1)/2))
And R = V/I i.e. replace the Is in the above expression by 1s.
For the 3-cube: R = 2{1/3} + 1/(3x2) = 5/6 ohm
For the 4-cube: R = 2{1/4 + 1/(4x3)} = 2/3 ohm
This formula yields the resistance from root to root of
two (n-1)-ary trees of height n/2 with their end nodes identified
(-when n is even; something similar when n is odd).
Coincidentally, the 4-cube is such an animal and thus the answer
2/3 ohms is correct in that case.
However, it does not provide the solution for n >= 5, as the hypercube
does not have quite as many edges as were counted in the formula above.
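Both the current-splitting formula and its failure for n >= 5 can be checked by solving the n-cube network exactly (a sketch, not from the original FAQ; it uses the graph Laplacian pseudoinverse and assumes 1-ohm edges):

```python
import numpy as np

def cube_resistance(n):
    """Exact resistance between antipodal corners of the n-cube,
    via the Laplacian pseudoinverse."""
    N = 1 << n
    L = np.zeros((N, N))
    for v in range(N):
        for b in range(n):
            w = v ^ (1 << b)       # neighbor across edge b
            L[v, v] += 1
            L[v, w] -= 1
    Lp = np.linalg.pinv(L)
    a, z = 0, N - 1                # antipodal pair
    return Lp[a, a] + Lp[z, z] - 2 * Lp[a, z]

def tree_formula(n):
    """The voltage sum from the text (unit current, unit resistors)."""
    if n % 2 == 0:
        return 2 * sum(1 / (n * (n - 1)**k) for k in range((n - 2) // 2 + 1))
    return (2 * sum(1 / (n * (n - 1)**k) for k in range((n - 3) // 2 + 1))
            + 1 / (n * (n - 1)**((n - 1) // 2)))

for n in (3, 4, 5):
    print(n, cube_resistance(n), tree_formula(n))
# agrees for n = 3 (5/6) and n = 4 (2/3); disagrees for n = 5
```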
4. The Infinite Plane
For an infinite lattice: First inject a constant current I at a point; figure
out the current flows (with heavy use of symmetry). Remove that current. Draw
out a current I from the other point of interest (or inject a negative current)
and figure out the flows (identical to earlier case, but displaced and in the
other direction). By the principle of superposition, if you inject a current I
into point a and take out a current I at point b at the same time, the currents
in the paths are simply the sum of the currents obtained in the earlier two
simpler cases. As in the n-cube, find the voltage between the points of
interest, divide by I and voila'!
As an illustration, in the adjacent points case: we have a current of I/4 in
each of the four resistors:
^ |
| v
<--o--> -->o<--
| ^
v |
(inject) (take out)
And adding the currents, we have I/2 in the resistor connecting the two points.
Therefore V=(1 ohm) x I/2 and effective resistance between the points = 1/2 ohm.
You can use symmetry, as shown above, to obtain the equivalent resistance
of 1/2 between two adjacent nodes; but symmetry alone will probably not
give you even the equivalent resistance of 2/pi between two diagonally
adjacent nodes.
[More generally, the equivalent resistance between two nodes k diagonal units
apart is (2/pi)(1+1/3+1/5+...+1/(2k-1)); that, plus symmetry and the known
equivalent resistance between two adjacent nodes, is sufficient to derive all
equivalent resistances in the lattice.]
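The superposition argument can be checked numerically by solving a moderately large finite grid; the finite answer approaches 1/2 ohm for adjacent nodes and 2/pi for diagonal neighbors (a sketch; the grid size and tolerances are arbitrary choices, and a finite grid always reads slightly high):

```python
import math
import numpy as np

def grid_resistance(n, a, b):
    """Effective resistance between nodes a and b of an n x n grid
    of 1-ohm resistors (free boundary)."""
    idx = lambda i, j: i * n + j
    N = n * n
    L = np.zeros((N, N))
    for i in range(n):
        for j in range(n):
            for di, dj in ((1, 0), (0, 1)):
                ii, jj = i + di, j + dj
                if ii < n and jj < n:
                    u, v = idx(i, j), idx(ii, jj)
                    L[u, u] += 1; L[v, v] += 1
                    L[u, v] -= 1; L[v, u] -= 1
    current = np.zeros(N)
    current[idx(*a)] = 1.0    # inject current I = 1 at a ...
    current[idx(*b)] = -1.0   # ... and take it out at b
    volts = np.linalg.lstsq(L, current, rcond=None)[0]
    return volts[idx(*a)] - volts[idx(*b)]

n, c = 31, 15                 # 31 x 31 grid, pair near the center
R_adj = grid_resistance(n, (c, c), (c, c + 1))
R_diag = grid_resistance(n, (c, c), (c + 1, c + 1))
print(R_adj, R_diag)          # near 1/2 and near 2/pi = 0.6366...
```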
5. Continuous sheet
Doesn't the resistance diverge in that case? The problem is that you can't
inject current at a point.
cf. "Random Walks and Electric Networks", by Doyle and Snell, published by the
Mathematical Association of America.
==> physics/sail.p <==
A sailor is in a sailboat on a river. The water (current) is flowing
downriver at a velocity of 3 knots with respect to the land. The wind
(air velocity) is zero, with respect to the land. The sailor wants
to proceed downriver as quickly as possible, maximizing his downstream
speed with respect to the land.
Should he raise the sail, or not?
==> physics/sail.s <==
Depends on the sail. If the boat is square-rigged, then not, since
raising the sail will simply increase the air resistance.
If the sailor has a fore-and-aft rig, then he should, since he can then
tack into the wind. (Imagine the boat in still water with a 3-knot head
wind).
==> physics/skid.p <==
What is the fastest way to make a 90 degree turn on a slippery road?
==> physics/skid.s <==
For higher speeds (measured at a small distance from the point of initiation
of a sharp turn) the fastest way round is to "outside loop" - that is, steer
away from the curve, and do a skidding 270.
This technique is taught in advanced driving schools.
References:
M. Freeman and P. Palffy, American Journal of Physics, vol 50, p. 1098, 1982.
P. Palffy and Unruh, American Journal of Physics, vol 49, p. 685, 1981.
==> physics/spheres.p <==
Two spheres are the same size and weight, but one is hollow. They are
made of uniform material, though of course not the same material. With
a minimum of apparatus, how can I tell which is hollow?
==> physics/spheres.s <==
Since the balls have equal diameter and equal mass, their volume and
density are also equal. However, the mass distribution is not equal,
so they will have different moments of inertia - the hollow sphere has
its mass concentrated at the outer edge, so its moment of inertia will
be greater than that of the solid sphere. Applying a known torque and
observing which sphere has the larger angular acceleration will determine
which is which. An easy way to do this is to "race" the spheres down an
inclined plane with enough friction to prevent the spheres from sliding.
Then, by conservation of energy:
mgh = 1/2 mv^2 + 1/2 Iw^2
Since the spheres are rolling without sliding, there is a relationship
between velocity and angular velocity:
w = v / r
so
mgh = 1/2 mv^2 + 1/2 I (v^2 / r^2) = 1/2 (m + I/r^2) v^2
and
v^2 = 2mgh / (m + I / r^2)
From this we can see that the sphere with larger moment of inertia (I) will
have a smaller velocity when rolled from the same height, if mass and radius
are equal with the other sphere. Thus the solid sphere will roll faster.
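Plugging in the standard moments of inertia illustrates the size of the effect (a sketch; it treats the hollow ball in the thin-shell limit I = 2/3 m r^2, an assumption, since a real hollow ball of the same mass has some wall thickness and lies between the two limits):

```python
g, h = 9.8, 1.0   # arbitrary illustrative values

def rolling_speed_squared(c, g=g, h=h):
    # v^2 = 2 g h / (1 + I/(m r^2)), writing I = c * m * r^2
    return 2 * g * h / (1 + c)

v2_solid = rolling_speed_squared(2 / 5)    # solid sphere, I = 2/5 m r^2
v2_hollow = rolling_speed_squared(2 / 3)   # thin shell,   I = 2/3 m r^2

print(v2_solid, v2_hollow)   # the solid sphere arrives faster
```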
==> physics/wind.p <==
Is a round-trip by airplane longer or shorter if there is wind blowing?
==> physics/wind.s <==
It will take longer, by the ratio (s^2)/(s^2 - w^2), where s is
the plane's speed and w is the wind speed. The stronger the wind, the
longer it will take, up until the wind speed equals the plane's speed, at
which point the plane will never finish the trip.
Math:
s = plane's speed
w = wind speed
d = distance in one direction
d / (s + w) = time to complete leg flying with the wind
d / (s - w) = time to complete leg flying against the wind
d / (s + w) + d / (s - w) = round trip time
d / (s + w) + d / (s - w)
------------------------- = ratio of round-trip time with wind to
      d / s + d / s         round-trip time with no wind (the bottom
                            is the top with w = 0)
This simplifies to s^2 / (s^2 - w^2).
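A quick check of the algebra with concrete numbers (arbitrary values; a sketch):

```python
s, w, d = 500.0, 100.0, 1000.0   # plane speed, wind speed, leg distance

t_wind = d / (s + w) + d / (s - w)   # round trip with wind
t_calm = 2 * d / s                   # round trip with no wind
ratio = t_wind / t_calm

print(ratio, s**2 / (s**2 - w**2))   # both 25/24 = 1.041666...
```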
==> probability/amoeba.p <==
A jar begins with one amoeba. Every minute, every amoeba
turns into 0, 1, 2, or 3 amoebae with probability 25%
for each case ( dies, does nothing, splits into 2, or splits
into 3). What is the probability that the amoeba population
eventually dies out?
==> probability/amoeba.s <==
If p is the probability that a single amoeba's descendants will die
out eventually, the probability that N amoebas' descendents will all
die out eventually must be p^N, since each amoeba is independent of
every other amoeba. Also, the probability that a single amoeba's
descendants will die out must be independent of time when averaged
over all the possibilities. At t=0, the probability is p; at t=1 the
probability is 0.25(p^0+p^1+p^2+p^3), and these probabilities must be
equal. Thus the extinction probability p is a root of f(p)=p, where
f(x) = (1+x+x^2+x^3)/4. This gives p^3+p^2-3p+1 = 0, which factors as
(p-1)(p^2+2p-1) = 0, so the root in (0,1) is p = sqrt(2)-1.
The generating function for the sequence P(n,i), which gives the
probability of i amoebas after n minutes, is f^n(x), where f^n(x) ==
f^(n-1) ( f(x) ), f^0(x) == x . That is, f^n is the nth composition
of f with itself.
Then f^n(0) gives the probability of 0 amoebas after n minutes, since
f^n(0) = P(n,0). We then note that:
f^(n+1)(x) = ( 1 + f^n(x) + (f^n(x))^2 + (f^n(x))^3 )/4
so that as n grows, f^n(0) converges to the smallest fixed point of f,
which is the extinction probability found above.
The generating function also gives an expression for the expectation
value of the number of amoebas after n minutes. This is d/dx(f^n(x))
evaluated at x=1. Using the chain rule we get f'(f^(n-1)(x))*d/dx(f^(n-1)(x))
and since f'(1) = 1.5 and f(1) = 1, we see that the result is just
1.5^n, as might be expected.
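Iterating the generating function confirms both the extinction probability and its fixed-point characterization (a sketch):

```python
import math

f = lambda x: (1 + x + x**2 + x**3) / 4

# f^n(0) = P(extinct by minute n) increases toward the extinction
# probability, the smallest fixed point of f.
p = 0.0
for _ in range(200):
    p = f(p)

root = math.sqrt(2) - 1
print(p, root)          # both about 0.414214
print(f(root) - root)   # essentially 0: the root satisfies f(p) = p
```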
==> probability/apriori.p <==
An urn contains one hundred balls, each either white or black. You sample one hundred
balls with replacement and they are all white. What is the probability
that all the balls are white?
==> probability/apriori.s <==
This question cannot be answered with the information given.
In general, the following formula gives the conditional probability
that all the balls are white given you have sampled one hundred balls
and they are all white:
P(100 white | 100 white samples) =
P(100 white samples | 100 white) * P(100 white)
-----------------------------------------------------------
sum(i=0 to 100) P(100 white samples | i white) * P(i white)
The probabilities P(i white) are needed to compute this formula. This
does not seem helpful, since one of these (P(100 white)) is just what we
are trying to compute. However, the following argument can be made:
Before the experiment, all possible numbers of white balls from zero to
one hundred are equally likely, so P(i white) = 1/101. Therefore, the
odds that all 100 balls are white given 100 white samples is:
P(100 white | 100 white samples) =
1 / ( sum(i=0 to 100) (i/100)^100 ) =
63.6%
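The 63.6% figure is easy to reproduce (a sketch; this only checks the arithmetic of the uniform-prior calculation, not the prior itself):

```python
# With P(i white) = 1/101 for i = 0..100, the priors cancel and
# P(100 white | 100 white samples) = 1 / sum_{i=0}^{100} (i/100)^100.
total = sum((i / 100) ** 100 for i in range(101))
p = 1 / total
print(round(p, 3))   # 0.636
```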
This argument is fallacious, however, since we cannot assume that the urn
was prepared so that all possible numbers of white balls from zero to one
hundred are equally likely. In general, we need to know the P(i white)
in order to calculate the P(100 white | 100 white samples). Without this
information, we cannot determine the answer.
This leads to a general "problem": our judgments about the relative
likelihood of things are based on past experience. Each experience allows
us to adjust our likelihood judgment, based on prior probabilities. This
is called Bayesian inference. However, if the prior probabilities are not
known, then neither are the derived probabilities. But how are the prior
probabilities determined? For example, if we are brains in the vat of a
diabolical scientist, all of our prior experiences are illusions, and
therefore all of our prior probabilities are wrong.
All of our probability judgments indeed depend upon the assumption that
we are not brains in a vat. If this assumption is wrong, all bets are
off.
==> probability/cab.p <==
A cab was involved in a hit and run accident at night. Two cab companies,
the Green and the Blue, operate in the city. Here is some data:
a) Although the two companies are equal in size, 85% of cab
accidents in the city involve Green cabs and 15% involve Blue cabs.
b) A witness identified the cab in this particular accident as Blue.
The court tested the reliability of the witness under the same circumstances
that existed on the night of the accident and concluded that the witness
correctly identified each one of the two colors 80% of the time and failed
20% of the time.
What is the probability that the cab involved in the accident was
Blue rather than Green?
If it looks like an obvious problem in statistics, then consider the
following argument:
The probability that the color of the cab was Blue is 80%! After all,
the witness is correct 80% of the time, and this time he said it was Blue!
What else need be considered? Nothing, right?
If we look at Bayes theorem (pretty basic statistical theorem) we
should get a much lower probability. But why should we consider statistical
theorems when the problem appears so clear cut? Should we just accept the
80% figure as correct?
==> probability/cab.s <==
The police tests don't apply directly, because according to the
wording, the witness, given any mix of cabs, would get the right
answer 80% of the time. Thus given a mix of 85% green and 15% blue
cabs, he will say 20% of the green cabs and 80% of the blue cabs are
blue. That's 20% of 85% plus 80% of 15%, or 17%+12% = 29% of all the
cabs that the witness will say are blue. Of those, only 12/29 are
actually blue. Thus P(cab is blue | witness claims blue) = 12/29.
That's just a little over 40%.
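The 12/29 posterior is exactly Bayes' theorem with the accident rates as priors (a sketch in exact arithmetic):

```python
from fractions import Fraction

p_blue, p_green = Fraction(15, 100), Fraction(85, 100)  # accident priors
p_correct = Fraction(80, 100)                           # witness reliability

# P(witness says blue) = P(blue)P(correct) + P(green)P(wrong)
says_blue = p_blue * p_correct + p_green * (1 - p_correct)
posterior = p_blue * p_correct / says_blue

print(says_blue, posterior)   # 29/100 12/29
```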
Think of it this way... suppose you had a robot watching parts on a
conveyor belt to spot defective parts, and suppose the robot made a
correct determination only 50% of the time (I know, you should
probably get rid of the robot...). If one out of a billion parts are
defective, then to a very good approximation you'd expect half your
parts to be rejected by the robot. That's 500 million per billion.
But you wouldn't expect more than one of those to be genuinely
defective. So given the mix of parts, a lot more than 50% of the
REJECTED parts will be rejected by mistake (even though 50% of ALL the
parts are correctly identified, and in particular, 50% of the
defective parts are rejected).
When the biases get so enormous, things start getting quite a bit
more in line with intuition.
For a related real-life example of probability in the courtroom see
People v. Collins, 68 Cal.2d 319 (1968).
==> probability/coincidence.p <==
Name some amazing coincidences.
==> probability/coincidence.s <==
The answer to the question, "Who wrote the Bible," is, of
course, Shakespeare. The King James Version was published in
1611. Shakespeare was 46 years old then (he turned 47 later in
the year). Look up Psalm 46. Count 46 words from the beginning of
the Psalm. You will find the word "Shake." Count 46 words from
the end of the Psalm. You will find the word "Spear." An obvious
coded message. QED.
How many inches in the pole-to-pole diameter of the Earth? The
answer is almost exactly 500,000,000 inches. Proof that the inch
was defined by spacemen.
==> probability/coupon.p <==
There is a free gift in my breakfast cereal. The manufacturers say
that the gift comes in four different colours, and encourage one to
collect all four (& so eat lots of their cereal). Assuming there is
an equal chance of getting any one of the colours, what is the
expected number of packets I must plough through to get all four?
Can you generalise to n colours and/or unequal probabilities?
Newsgroups: rec.puzzles,news.answers
From: uunet!questrel!chris (Chris Cole)
Subject: rec.puzzles FAQ, part 15 of 15
Message-ID: <puzzles-faq-15_717034101@questrel.com>
Followup-To: rec.puzzles
Summary: This posting contains a list of
Frequently Asked Questions (and their answers).
It should be read by anyone who wishes to
post to the rec.puzzles newsgroup.
Sender: chris@questrel.com (Chris Cole)
Reply-To: uunet!questrel!faql-comment
Organization: Questrel, Inc.
References: <puzzles-faq-1_717034101@questrel.com>
Date: Mon, 21 Sep 1992 00:09:54 GMT
Approved: news-answers-request@MIT.Edu
Expires: Sat, 3 Apr 1993 00:08:21 GMT
Lines: 1411
Archive-name: puzzles-faq/part15
Last-modified: 1992/09/20
Version: 3
==> probability/coupon.s <==
The problem is well known under the name of "COUPON COLLECTOR PROBLEM".
The answer for n equally likely coupons is
(1) C(n)=n*H(n) with H(n)=1+1/2+1/3+...+1/n.
In the unequal probabilities case, with p_i the probability of coupon i,
it becomes
(2) C(n)=int_0^infty [1-prod_{i=1}^n (1-exp(-p_i*t))] dt
which reduces to (1) when p_i=1/n (An easy exercise).
In the equal probabilities case C(n)~n log(n). For a Zipf law,
from (2), we have C(n)~n log^2(n).
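For four equally likely colours, formula (1) gives C(4) = 4*(1 + 1/2 + 1/3 + 1/4) = 25/3, about 8.33 packets, and a crude numerical integration of (2) agrees (a sketch):

```python
import math
from fractions import Fraction

def C(n):
    # (1): n * H(n), exactly
    return n * sum(Fraction(1, k) for k in range(1, n + 1))

def C_integral(n, upper=200.0, steps=200000):
    # (2) with p_i = 1/n, by the midpoint rule; the tail beyond
    # `upper` is negligible for small n
    dt = upper / steps
    return sum((1 - (1 - math.exp(-((i + 0.5) * dt) / n)) ** n) * dt
               for i in range(steps))

print(C(4), C_integral(4))   # 25/3 and about 8.3333
```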
A related problem is the "BIRTHDAY PARADOX" familiar to people
interested in hashing algorithms: With a party of 23 persons,
you are likely (i.e. with probability >50%) to find two with
the same birthday. The non equiprobable case was solved by:
M. Klamkin and D. Newman, Extensions of the birthday
surprise, J. Comb. Th. 3 (1967), 279-282.
==> probability/darts.p <==
Peter throws two darts at a dartboard, aiming for the center. The
second dart lands farther from the center than the first. If Peter now
throws another dart at the board, aiming for the center, what is the
probability that this third throw is also worse (i.e., farther from
the center) than his first? Assume Peter's skilfulness is constant.
==> probability/darts.s <==
Since the three darts are thrown independently,
they each have a 1/3 chance of being the best throw. As long as the
third dart is not the best throw, it will be worse than the first dart.
Therefore the answer is 2/3.
Ranking the three darts' results from A (best) to C
(worst), there are, a priori, six equiprobable outcomes.
possibility # 1 2 3 4 5 6
1st throw A A B B C C
2nd throw B C A C A B
3rd throw C B C A B A
The information from the first two throws shows us that the first
throw will not be the worst, nor the second throw the best. Thus
possibilities 3, 5 and 6 are eliminated, leaving three equiprobable
cases, 1, 2 and 4. Of these, 1 and 2 have the third throw worse than
the first; 4 does not. Again the answer is 2/3.
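The six-permutation table can be checked mechanically (a sketch; rank 1 means closest to the center):

```python
from fractions import Fraction
from itertools import permutations

# All equally likely orderings of the three throws' distance ranks,
# conditioned on the second throw being worse than the first.
cases = [p for p in permutations((1, 2, 3)) if p[1] > p[0]]
worse = [p for p in cases if p[2] > p[0]]   # third worse than first

print(len(cases), len(worse))   # 3 2, so the probability is 2/3
```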
==> probability/flips.p <==
Consider a run of coin tosses: HHTHTTHTTTHTTTTHHHTHHHHHTHTTHT
Define a success as a run of one H or T (as in THT or HTH). Use two
different methods of sampling. The first method would consist of
sampling the above data by taking 7 groups of three. This translates
into the following sequences HHT, HTT, HTT, THT, TTT, HHH, and THH.
In this sample there was one success, THT. The second method of
sampling could be gotten by taking the groups in a serial sequence.
Another way of explaining the method would be to take the tosses 1, 2,
and 3 as the first set then tosses 2, 3, and 4 as the second set and
so on to produce seven samples. The samples would be HHT, HTH, THT,
HTT, TTH, THT, and HTT, thus giving two successes, HTH and THT. With
these two methods what would be the expected value and the standard
deviation for both methods?
==> probability/flips.s <==
Case 1:
Let us start with a simple case: define success as T and consider
sequences of length 1. In this case, the two sampling techniques are
equivalent. Assume that we are examining a truly random sequence of
T and H. Then, whether n groups of single tosses are considered or a
sequence of length n is considered, we will have the following
statistics on the number of successes:
Mean = n/2, and standard deviation (sd) = square_root(n)/2
Case 2:
Define success as HT or TH: Here the two sampling techniques need to
be distinguished:
Sampling 1: Take "n" groups of two: Here probability of getting success in
any group is 1/2 (TH and HT out of 4 possible cases). So the mean and the
standard deviation are the same as described in Case 1.
Sampling 2: Generate a sequence of length "n". It is easy to show
that (n-1) samples are generated. The number of successes in this
sequence is the same as the number of T-to-H and H-to-T transitions. This
problem is solved in John P. Robinson, "Transition Count and Syndrome
are Uncorrelated," IEEE Transactions on Information Theory, Jan. 1988.
I will just state the results:
mean = (n-1)/2 and standard deviation = square_root(n-1)/2.
In fact, if you want "n" samples in this case you need to generate
length (n+1) sequence. Then the new mean and standard deviation are
the same as described in Sampling 1. (replace n by n+1). The
advantage in this sampling (i.e., sampling 2) is that you need only
generate a sequence length of (n+1) to obtain n sample points as
opposed to 2n (n groups of 2) in Sampling 1.
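The transition-count statistics can be verified exhaustively for a small n (a sketch; it enumerates all 2^10 sequences):

```python
from itertools import product

n = 10
counts = [sum(seq[i] != seq[i + 1] for i in range(n - 1))
          for seq in product('HT', repeat=n)]

mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)

print(mean, var)   # (n-1)/2 = 4.5 and variance (n-1)/4 = 2.25
```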
OBSERVATION: The statistics have not changed.
Case 3:
Define success as THT and HTH.
Sampling 1: This is a simple situation. Let us assume "n" samples
need to be generated. Therefore, "n" groups of three are generated.
The probability of a group being successful is 1/4 (THT and HTH out of
8 cases). Therefore the mean and standard deviation are:
mean = n/4 and sd = square_root(n*(1/4)*(3/4)) = square_root(3n)/4
Sampling 2: This is not a simple situation. Let us generate a random
sequence of length "n". There will be (n-2) samples in this case.
Use an approach similar to that followed in the Jan 88 IEEE paper. I
will just state the results. First we define a function or enumerator
P(n,k) as the number of length "n" sequences that generate "k"
successes. For example,
P(4,1)= 4 (HHTH, HTHH, TTHT, and THTT are 4 possible length 4 sequences).
I derived two generating functions g(x) and h(x) in order to enumerate
P(n,k), they are compactly represented by the following matrix
polynomial.
    [ g(x) ]   [ 1  1 ]^(n-3) [  4   ]
    [ h(x) ] = [ 1  x ]       [ 2+2x ]
The above is expressed as a matrix generating function. It can be shown
that P(n,k) is the coefficient of the x^k in the polynomial
(g(x)+h(x)).
For example, if n=4 we get (g(x)+h(x)) from the matrix generating
function as (10+4x+2x^2). Clearly, P(4,1) (coefficient of x) is 4 and
P(4,2)=2 ( There are two such sequences THTH, and HTHT).
We can show that
mean(k) = (n-2)/4 and sd= square_root(5n-12)/4
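Both the enumerator values P(4,1) = 4, P(4,2) = 2 and the mean/sd claims can be verified by brute force (a sketch; it exhausts all length-n sequences for small n):

```python
from collections import Counter
from itertools import product

def successes(seq):
    # count overlapping THT/HTH windows in the string seq
    return sum(seq[i:i + 3] in ('THT', 'HTH') for i in range(len(seq) - 2))

# Check the enumerator for n = 4: P(4,1) = 4 and P(4,2) = 2.
counts4 = Counter(successes(''.join(s)) for s in product('HT', repeat=4))
print(counts4[1], counts4[2])   # 4 2

# Check mean and sd for n = 8: mean = (n-2)/4, variance = (5n-12)/16.
n = 8
ks = [successes(''.join(s)) for s in product('HT', repeat=n)]
mean = sum(ks) / len(ks)
var = sum((k - mean) ** 2 for k in ks) / len(ks)
print(mean, var)   # 1.5 1.75
```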
If we want to compare Sampling 1 with Sampling 2 then in Sampling 2 we
need to generate "n" samples. This can be done by using sequences of length
(n+2). Then our new statistics would be
mean = n/4 (same as that in Sampling 1)
sd = square_root(5n-2)/4 (not the same as in Sampling 1)
sd in Sampling 2 > sd in Sampling 1.
This difference can be explained by the dependence between overlapping
samples in serial sampling. Successes at adjacent positions are
positively correlated: a success THT can only be followed by the
success HTH (and vice versa), so successes tend to cluster, and this
positive correlation inflates the variance relative to the independent
groups of Sampling 1. (In Case 2, by contrast, the transition counts
at adjacent positions are uncorrelated, which is why the two sampling
methods gave the same statistics there.)
A similar approach can be followed for higher-dimensional cases.
Here's a C program I had lying around that is relevant to the
current discussion of coin-flipping. The algorithm is N^3 (for N flips)
but it could certainly be improved. Compile with